Anthropic’s Top Safety Researcher Resigns with Cryptic Warning About Global Peril
Mrinank Sharma, head of Anthropic's Safeguards Research Team, has abruptly resigned from the AI firm behind Claude, a leading competitor to OpenAI's ChatGPT. His departure comes just 16 months after he launched the safety team tasked with mitigating risks from deployed AI systems.
His resignation letter contained an ominous warning: "The world is in peril." Sharma attributed this not solely to AI or bioweapons but to "interconnected crises" unfolding now. The remark has sparked intense speculation about unaddressed existential risks in AI development.
Sharma's exit adds to a troubling pattern of high-profile departures across AI companies. His letter emphasized principled decision-making, hinting at possible ethical concerns behind his resignation at a critical moment for AI governance.